In [1]:
# standard imports
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# using seaborn plotting
import seaborn as sns
sns.set_style('white')
sns.set_context('paper')
plt.rcParams['figure.figsize'] = (15, 12)
In [4]:
simple = stats.uniform(loc=2, scale=3)   # 'signal': uniform (tophat) on [2, 5]
errscale = 0.25
err = stats.norm(loc=0, scale=errscale)  # zero-mean Gaussian error
# We cannot convolve continuous PDFs analytically in general, so we
# discretise them into probability mass functions on a fine grid and
# convolve those with the FFT.
delta = 1e-4
big_grid = np.arange(-10, 10, delta)
pmf1 = simple.pdf(big_grid) * delta      # discretised uniform PMF
pmf2 = err.pdf(big_grid) * delta         # discretised Gaussian PMF
from scipy import signal
conv_pmf = signal.fftconvolve(pmf1, pmf2, 'same')
In [5]:
print("Grid length, sum(gauss_pmf), sum(uni_pmf), sum(conv_pmf):")
print(len(big_grid), sum(err.pdf(big_grid) * delta), sum(simple.pdf(big_grid) * delta), sum(conv_pmf))
conv_pmf = conv_pmf / sum(conv_pmf)
plt.plot(big_grid, pmf1, label='Tophat')
plt.plot(big_grid, pmf2, label='Gaussian error')
plt.plot(big_grid, conv_pmf, label='Sum')
plt.xlim(-3, max(big_grid))
plt.legend(loc='best')
plt.suptitle('PMFs')
Out[5]:
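As a sanity check, the mean of the convolved PMF should equal the sum of the component means, here $E[U] + E[\varepsilon] = 3.5 + 0 = 3.5$; a minimal sketch:
In [ ]:
# Sanity check: convolution of independent components gives E[X + Y] = E[X] + E[Y].
print((big_grid * pmf1).sum() / pmf1.sum())          # ~3.5, mean of the uniform on [2, 5]
print((big_grid * pmf2).sum() / pmf2.sum())          # ~0.0, mean of the Gaussian error
print((big_grid * conv_pmf).sum() / conv_pmf.sum())  # ~3.5, mean of their sum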
The algorithm assumes that one knows the claim count distribution $T$ (the probability distribution of the number of claims that will occur) and the severity distribution $S$ (the distribution of the amount of a single claim).
The algorithm computes the aggregate loss distribution, $AGG = S_1 + S_2 + \dots + S_T$
(the distribution of the total amount of all claims). The algorithm applies to arbitrary frequency and severity distributions.
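For a Poisson claim count, this aggregate distribution can be computed directly with the FFT machinery used above: discretise the severity into a PMF on a grid starting at 0, take its DFT $\hat f_X$, and apply the Poisson probability generating function, $\hat f_{AGG} = \exp(\lambda(\hat f_X - 1))$. A minimal sketch, reusing for concreteness the Poisson/Weibull parameters of the simulation further below (the grid length and spacing are arbitrary illustrative choices):
In [ ]:
# FFT route to the aggregate distribution for a Poisson claim count (a sketch).
lam = 10                                    # Poisson mean, as in the simulation below
alpha = 1.79                                # Weibull shape, as in the simulation below
step = 0.01                                 # grid spacing (illustrative)
grid = np.arange(0, 60, step)               # long enough to avoid FFT wrap-around
sev_pmf = stats.weibull_min(alpha).pdf(grid) * step
sev_pmf /= sev_pmf.sum()                    # renormalise the discretised severity
phi_X = np.fft.fft(sev_pmf)                 # DFT of the severity PMF
agg_pmf = np.fft.ifft(np.exp(lam * (phi_X - 1))).real
print(agg_pmf.sum(), (grid * agg_pmf).sum(), lam * (grid * sev_pmf).sum())
# total mass ~1, and the aggregate mean should be ~ lam * E[severity]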
Suppose a claim count distribution, a severity distribution, and the corresponding aggregate distribution are specified. With regard to the severity distribution, suppose further that the probability of any given claim being in excess of a given attachment point $A$ is $\alpha$. Suppose it is desired to compute the aggregate distribution for claims excess of $A$ (this $A$ has nothing to do with the vector of spreads $A$ used previously). For example, the aggregate distribution might be based on a Poisson claim count distribution with parameter 1,000 (i.e., the expected number of claims is 1,000) and a Weibull severity distribution with mean 10,000 and coefficient of variation 8. If $A$ is 100,000, then $\alpha$ is 0.0197.
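As a rough check of that $\alpha$, one can back out the Weibull shape and scale from the stated mean and coefficient of variation and evaluate the survival function at the attachment point. A sketch, assuming the usual two-parameter Weibull (shape $k$, scale $c$):
In [ ]:
# Back out Weibull (shape k, scale c) from mean and CV, then P(X > A) (a sketch).
from scipy.special import gamma as G
from scipy.optimize import brentq
mean, cv, A = 10000.0, 8.0, 100000.0
# For shape k: 1 + CV^2 = Gamma(1 + 2/k) / Gamma(1 + 1/k)^2
k = brentq(lambda k: G(1 + 2/k) / G(1 + 1/k)**2 - (1 + cv**2), 0.1, 5.0)
c = mean / G(1 + 1/k)                 # scale follows from the mean
prob_excess = np.exp(-(A / c)**k)     # Weibull survival function at A
print(k, c, prob_excess)              # should come out near the 0.0197 quoted above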
Perhaps the simplest and often most direct approach is Monte Carlo simulation. The Monte Carlo method involves the steps listed under "Algorithm for Simulation" below.
To illustrate the application of this method, consider the following example. It generates an aggregate loss distribution using a Poisson frequency-of-loss model and a Weibull severity-of-loss model. We specify the parameter values of each distribution and the number of simulations required.
Algorithm for Simulation
1. Draw the number of claims $N$ from the frequency distribution.
2. Draw $N$ individual claim amounts $X_1, \dots, X_N$ from the severity distribution.
3. Record the aggregate loss for this run, $L = X_1 + \dots + X_N$.
4. Repeat steps 1-3 $K$ times.
Doing this you get a sample of losses $L_1, \dots, L_K$ on which you can compute all sorts of histograms, density fits and other statistics.
On this sample you could try to fit a loss distribution (e.g. a Gamma or translated Gamma, see here) by maximum likelihood or by the method of moments. But you can apply the method of moments even without Monte Carlo, because if you assume that the number of losses $N$ and the loss sizes $X$ are independent, then $E[L] = E[N]\,E[X]$ and $V[L] = E[N]\,V[X] + E[X]^2\,V[N]$.
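To make these formulas concrete, here is a minimal sketch that evaluates them with scipy for the same Poisson/Weibull parameters used in the simulation below:
In [ ]:
# Method-of-moments aggregate mean and variance, no simulation required (a sketch).
EN, VN = stats.poisson(10).stats(moments='mv')         # frequency: E[N], V[N]
EX, VX = stats.weibull_min(1.79).stats(moments='mv')   # severity:  E[X], V[X]
EL = EN * EX                     # E[L] = E[N] E[X]
VL = EN * VX + EX**2 * VN        # V[L] = E[N] V[X] + E[X]^2 V[N]
print(EL, VL)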
In [64]:
from scipy import stats
simulations = 5000       # number of Monte Carlo runs
poisson_lambda = 10      # expected number of claims per run
weibull_alpha = 1.79     # Weibull shape parameter for claim severity
frequency = stats.poisson(poisson_lambda).rvs(size=simulations)    # claim count for each run
severity = stats.weibull_min(weibull_alpha).rvs(size=simulations)  # sample of single-claim severities (for plotting)
In [65]:
sns.distplot(frequency, kde=False)
plt.title(r"Frequency distribution: Poisson Distribution with $\lambda$ = %g" % poisson_lambda, fontsize=20)
Out[65]:
In [66]:
sns.distplot(severity, kde=False)
plt.title(r"Severity distribution: Weibull Distribution with $\alpha$ = %g" % weibull_alpha, fontsize=20)
Out[66]:
In [67]:
loss = []
for i in range(simulations):
    # draw one run's worth of claims: n_claims severities, then sum them
    n_claims = frequency[i]
    severity_run = stats.weibull_min(weibull_alpha).rvs(size=n_claims)
    loss.append(np.sum(severity_run))
In [68]:
sns.distplot(loss, kde=False)
plt.title("Aggregate Loss distribution", fontsize=20)
Out[68]:
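Tying this back to the method-of-moments formulas above, a short sketch that compares the simulated mean and variance of the loss sample with the theoretical values, and fits a Gamma distribution to the sample by maximum likelihood as suggested earlier (fixing the location at 0 is an illustrative assumption):
In [ ]:
# Compare simulated moments with the compound-distribution formulas (a sketch).
EN, VN = stats.poisson(poisson_lambda).stats(moments='mv')
EX, VX = stats.weibull_min(weibull_alpha).stats(moments='mv')
print("simulated:  ", np.mean(loss), np.var(loss))
print("theoretical:", EN * EX, EN * VX + EX**2 * VN)
# Maximum-likelihood Gamma fit to the aggregate sample (location fixed at 0).
shape, loc, scale = stats.gamma.fit(loss, floc=0)
x = np.linspace(0, max(loss), 500)
sns.distplot(loss, kde=False, norm_hist=True)
plt.plot(x, stats.gamma(shape, loc=loc, scale=scale).pdf(x), label='Gamma MLE fit')
plt.legend(loc='best')
plt.title("Aggregate loss with fitted Gamma density", fontsize=20)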